Counterfactuals and causability in explainable artificial intelligence: Theory, algorithms, and applications

Authors

Abstract

Deep learning models have achieved high performance across different domains, such as medical decision-making, autonomous vehicles, and decision support systems, among many others. However, despite this success, the inner mechanisms of these models are opaque because their internal representations are too complex for a human to understand. This opacity makes it hard to understand how or why the predictions of deep models are generated. There has been growing interest in model-agnostic methods that make deep learning models more transparent and explainable to humans. Some researchers have recently argued that for a machine to achieve human-level explainability, it needs to provide causally understandable explanations, also known as causability. A specific class of algorithms with the potential to provide causability are counterfactuals. This paper presents an in-depth systematic review of the diverse existing literature on counterfactuals and causability in explainable artificial intelligence (AI). We performed a Latent Dirichlet Allocation (LDA) topic modelling analysis under the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to find the most relevant articles. This analysis yielded a novel taxonomy that considers the grounding theories of the surveyed algorithms, together with their underlying properties and applications to real-world data. Our research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded in a causal theoretical formalism and, consequently, cannot promote causability to the human decision-maker. Furthermore, our findings suggest that the explanations derived from popular algorithms in the literature provide spurious correlations rather than cause/effect relationships, leading to sub-optimal, erroneous, or even biased explanations. Thus, this paper also advances the literature with new directions and challenges for promoting causability in model-agnostic approaches to explainable AI.

• An in-depth systematic survey of counterfactuals for XAI. • Opportunities and challenges based on model-agnostic approaches.
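The counterfactual explanations surveyed above answer questions of the form "what is the smallest change to this input that would flip the model's prediction?". As a minimal illustrative sketch (not any specific algorithm from the review, and with a hypothetical toy dataset), the model-agnostic example below finds a counterfactual by walking from an instance toward the nearest training point of the opposite predicted class:

```python
# Minimal model-agnostic counterfactual sketch (illustrative only): given an
# instance x, move toward the nearest training point whose predicted class
# differs, and return the first point along that line where the model's
# prediction flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical toy data: the class depends only on the first feature.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, X_train, steps=100):
    """Return a nearby point that the model assigns to the opposite class."""
    original = model.predict(x.reshape(1, -1))[0]
    # Candidate targets: training points predicted as the other class.
    opposite = X_train[model.predict(X_train) != original]
    target = opposite[np.argmin(np.linalg.norm(opposite - x, axis=1))]
    # Walk from x toward the target until the prediction flips.
    for t in np.linspace(0.0, 1.0, steps + 1):
        candidate = (1 - t) * x + t * target
        if model.predict(candidate.reshape(1, -1))[0] != original:
            return candidate
    return target  # the target itself is predicted as the opposite class

x = np.array([-1.0, 0.5])         # the model predicts class 0 here
cf = counterfactual(x, model, X)  # nearby point predicted as class 1
```

The difference `cf - x` is the "what would need to change" part of the explanation. Note that, as the paper argues, such perturbation-based counterfactuals reflect the model's decision boundary rather than a causal model of the data, so they can surface spurious correlations instead of cause/effect relationships.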

Similar articles

Explainable Artificial Intelligence for Training and Tutoring

This paper describes an Explainable Artificial Intelligence (XAI) tool that allows entities to answer questions about their activities within a tactical simulation. We show how XAI can be used to provide more meaningful after-action reviews and discuss ongoing work to integrate an intelligent tutor into the XAI framework.

Building Explainable Artificial Intelligence Systems

As artificial intelligence (AI) systems and behavior models in military simulations become increasingly complex, it has been difficult for users to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but designers have not heeded the lessons learned from work in explaining expert system behavior. These new explanation systems a...

Explainable Recommendation: Theory and Applications

Although personalized recommendation has been investigated for decades, the wide adoption of Latent Factor Models (LFM) has made the explainability of recommendations a critical issue to both the research community and practical application of recommender systems. For example, in many practical systems the algorithm just provides a personalized item recommendation list to the users, without per...

Explainable Artificial Intelligence via Bayesian Teaching

Modern machine learning methods are increasingly powerful and opaque. This opaqueness is a concern across a variety of domains in which algorithms are making important decisions that should be scrutable. The explainability of machine learning systems is therefore of increasing interest. We propose an explanation-by-examples approach that builds on our recent research in Bayesian teaching in which...

Automated Reasoning for Explainable Artificial Intelligence

Reasoning and learning have been considered fundamental features of intelligence ever since the dawn of the field of artificial intelligence, leading to the development of the research areas of automated reasoning and machine learning. This paper discusses the relationship between automated reasoning and machine learning, and more generally between automated reasoning and artificial intelligenc...


Journal

Journal title: Information Fusion

Year: 2022

ISSN: 1566-2535, 1872-6305

DOI: https://doi.org/10.1016/j.inffus.2021.11.003